An Extension of Karmarkar Type Algorithm to a Class of Convex Separable Programming Problems with Global Linear Rate of Convergence

Authors

  • Renato D. C. Monteiro
  • Ilan Adler
Abstract

We describe a primal-dual interior point algorithm for a class of convex separable programming problems subject to linear constraints. Each iteration updates a penalty parameter and finds a Newton step associated with the Karush-Kuhn-Tucker system of equations which characterizes a solution of the logarithmic barrier function problem for that parameter. It is shown that the duality gap is reduced at each iteration by a factor of (1 - δ/√n), where δ is positive and depends on some parameters associated with the objective function.
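To make the iteration concrete, the sketch below shows what a primal-dual Newton step on the perturbed Karush-Kuhn-Tucker system looks like for a separable objective min Σᵢ fᵢ(xᵢ) subject to Ax = b, x ≥ 0. It is an illustrative Python/NumPy reconstruction under standard interior-point conventions, not the paper's exact algorithm: the fraction-to-boundary step rule, the dense linear solve, and the placeholder constant delta in the driver are assumptions.

```python
import numpy as np

def primal_dual_newton_step(x, y, s, A, b, grad_f, hess_f, mu):
    """One Newton step for the perturbed KKT system of
         min  sum_i f_i(x_i)   s.t.  A x = b,  x >= 0,
       namely  grad f(x) - A'y - s = 0,  A x = b,  x_i * s_i = mu.
       Illustrative sketch only, not the paper's exact update rules."""
    n, m = x.size, b.size
    H = np.diag(hess_f(x))               # separable objective => diagonal Hessian

    # Residuals of the perturbed KKT conditions.
    r_dual = grad_f(x) - A.T @ y - s     # stationarity / dual feasibility
    r_prim = A @ x - b                   # primal feasibility
    r_cent = x * s - mu                  # centering condition x_i * s_i = mu

    # Assemble and solve the (2n + m) x (2n + m) Newton system (dense, for clarity).
    KKT = np.block([
        [H,          -A.T,              -np.eye(n)],
        [A,           np.zeros((m, m)),  np.zeros((m, n))],
        [np.diag(s),  np.zeros((n, m)),  np.diag(x)],
    ])
    d = np.linalg.solve(KKT, -np.concatenate([r_dual, r_prim, r_cent]))
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]

    # Damped step keeping x and s strictly positive (fraction-to-boundary rule).
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.995 * float(np.min(-v[neg] / dv[neg])))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds


# Hypothetical driver: shrinking mu geometrically mirrors the (1 - delta/sqrt(n))
# duality-gap reduction; delta here is a placeholder, not the paper's constant.
def barrier_path_following(x, y, s, A, b, grad_f, hess_f, delta=0.1, iters=50):
    mu = float(x @ s) / x.size
    for _ in range(iters):
        x, y, s = primal_dual_newton_step(x, y, s, A, b, grad_f, hess_f, mu)
        mu *= 1.0 - delta / np.sqrt(x.size)
    return x, y, s
```

In the paper, δ is tied to the abstract's "parameters associated with the objective function"; the sketch simply treats it as a tunable constant.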


Similar articles

Modify the linear search formula in the BFGS method to achieve global convergence.



Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming

In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programmin...

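The "inexact Newton direction" mentioned in this item can be illustrated generically: instead of solving the Newton system exactly at each interior-point iteration, an iterative solver is stopped once a relative residual tolerance is met. The conjugate-gradient sketch below assumes a symmetric positive definite reduced system M (for example, a normal-equations form of the Newton system); it is a generic illustration, not the cited paper's scheme.

```python
import numpy as np

def inexact_newton_direction(M, rhs, eta=1e-2, max_iter=1000):
    """Approximately solve M d = rhs by conjugate gradients, stopping once
       ||rhs - M d|| <= eta * ||rhs||.  M is assumed symmetric positive definite.
       Generic sketch of an inexact Newton direction, not the paper's algorithm."""
    d = np.zeros_like(rhs)
    r = rhs - M @ d                  # initial residual (= rhs, since d = 0)
    p = r.copy()
    rs = float(r @ r)
    tol = eta * np.linalg.norm(rhs)
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol:       # inexactness criterion: relative residual
            break
        Mp = M @ p
        alpha = rs / float(p @ Mp)
        d += alpha * p
        r -= alpha * Mp
        rs_new = float(r @ r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```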

A Recurrent Neural Network for Solving Strictly Convex Quadratic Programming Problems

In this paper we present an improved neural network for solving strictly convex quadratic programming (QP) problems. The proposed model is derived from a piecewise equation corresponding to the optimality condition of the convex QP problem and has lower structural complexity than other existing neural network models for solving such problems. On the theoretical side, stability and global converge...

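A common model in this family couples the QP optimality condition with a projection: x solves min ½xᵀQx + cᵀx over a box exactly when x = P(x - (Qx + c)). The forward-Euler simulation below of dx/dt = -x + P(x - (Qx + c)) is a generic projection-type network for the box-constrained case; the cited paper's "piecewise equation" model and its complexity claims are not reproduced here.

```python
import numpy as np

def qp_projection_network(Q, c, lo, hi, x0, step=1e-2, iters=50000, tol=1e-9):
    """Forward-Euler simulation of the projection dynamics
           dx/dt = -x + P_box(x - (Q x + c)),
       whose equilibria are the solutions of
           min 0.5 x'Q x + c'x   s.t.  lo <= x <= hi   (Q symmetric positive definite).
       Generic projection-type recurrent network, not necessarily the paper's model."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        dx = -x + np.clip(x - (Q @ x + c), lo, hi)   # right-hand side of the ODE
        x = x + step * dx
        if np.linalg.norm(dx) < tol:                 # (near-)equilibrium reached
            break
    return x

# Example: minimize 0.5*(x1**2 + x2**2) - x1 - 2*x2 on [0, 1]^2; the solution is (1, 1).
# x_star = qp_projection_network(np.eye(2), np.array([-1.0, -2.0]), 0.0, 1.0, np.zeros(2))
```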

A generalized implicit enumeration algorithm for a class of integer nonlinear programming problems

Presented here is a generalization of the implicit enumeration algorithm that can be applied when the objective function is being maximized and can be rewritten as the difference of two non-decreasing functions. Also developed is a computational algorithm, named linear speedup, to use whatever explicit linear constraints are present to speed up the search for a solution. The method is easy to u...

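The bounding idea behind enumerating with a difference of non-decreasing functions can be sketched for the binary special case: on a partial assignment, setting the free variables to 1 overestimates g and setting them to 0 underestimates h, so g(fixed, 1, ..., 1) - h(fixed, 0, ..., 0) is a valid upper bound for pruning. The sketch below is a hypothetical illustration of that bound only; the paper's treatment of general integer variables and its "linear speedup" use of explicit linear constraints are omitted.

```python
from typing import Callable, List, Optional, Tuple

def implicit_enumeration(n: int,
                         g: Callable[[List[int]], float],
                         h: Callable[[List[int]], float]) -> Tuple[float, Optional[List[int]]]:
    """Maximize f(x) = g(x) - h(x) over x in {0,1}^n, where g and h are both
       non-decreasing in every coordinate.  Binary illustration of the implicit
       enumeration bound; not the paper's full generalization."""
    best_val = float("-inf")
    best_x: Optional[List[int]] = None

    def recurse(fixed: List[int]) -> None:
        nonlocal best_val, best_x
        k = len(fixed)
        if k == n:
            val = g(fixed) - h(fixed)
            if val > best_val:
                best_val, best_x = val, fixed[:]
            return
        # Monotone bound: free variables at 1 maximize g, at 0 minimize h.
        if g(fixed + [1] * (n - k)) - h(fixed + [0] * (n - k)) <= best_val:
            return                                   # prune this subtree
        for bit in (1, 0):
            recurse(fixed + [bit])

    recurse([])
    return best_val, best_x

# Example with non-decreasing g and h on {0,1}^3:
# implicit_enumeration(3, lambda x: 3*x[0] + 2*x[1] + x[2], lambda x: 2*x[0]*x[1])
# returns (4, [1, 1, 1]).
```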

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...




Journal:
  • Math. Oper. Res.

Volume 15, Issue 

Pages -

Publication date: 1990